Goals for which Less Wrong does (and doesn't) help

post by AnnaSalamon · 2010-11-18T22:37:36.984Z · LW · GW · Legacy · 105 comments


Related to: Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality

We’ve had a lot of good criticism of Less Wrong lately (including Patri’s post above, which contains a number of useful points). But to prevent those posts from confusing newcomers, this may be a good time to review what Less Wrong is useful for.

In particular: I had a conversation last Sunday with a fellow, I’ll call him Jim, who was trying to choose a career that would let him “help shape the singularity (or simply the future of humanity) in a positive way”.  He was trying to sort out what was efficient, and he aimed to be careful to have goals and not roles.  

So far, excellent news, right?  A thoughtful, capable person is trying to sort out how, exactly, to have the best impact on humanity’s future.  Whatever your views on the existential risks landscape, it’s clear humanity could use more people like that.

The part that concerned me was that Jim had put a site-blocker on LW (as well as on the other blogs he read) after reading Patri’s post, which, he said, had “hit him like a load of bricks”.  Jim wanted to get his act together and really help the world, not diddle around reading shiny-fun blog comments.  But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.  And they were the sort of errors LW could have helped with.  And there was no obvious force in his off-line, focused, productive life that could help in a similar way.

So, in case it’s useful to others, a review of what LW is useful for.

When you do (and don’t) need epistemic rationality

For some tasks, the world provides rich, inexpensive empirical feedback.  In these tasks you hardly need reasoning.  Just try the task many ways, steal from the best role-models you can find, and take care to notice what is and isn’t giving you results.

Thus, if you want to learn to sculpt, reading Less Wrong is a bad way to go about it.  Better to find some clay and a hands-on sculpting course.  The situation is similar for small talk, cooking, selling, programming, and many other useful skills.

Unfortunately, most of us also have goals for which we can obtain no such ready success/failure data. For example, if you want to know whether cryonics is a good buy, you can’t just try buying it and not-buying it and see which works better.  If you miss your first bet, you’re out for good.

There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments *really will* be safe bets for the next forty years.  For these goals we are forced to use reasoning, as failure-prone as human reasoning is.  If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.

The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" -- subjects on which respectable people should not speak -- or else to focus one's discussion on the most similar-seeming subject for which it *is* easy to gather empirical data (and so, for example, to rate charities as "good" when they have a low percentage of overhead, rather than a high impact). Insofar as we are stuck caring about such goals and betting our actions on various routes to their achievement, this is not much help.[2]

How to develop epistemic rationality

If you want to develop epistemic rationality, it helps to spend time with the best epistemic rationalists you can find.  For many, although not all, this will mean Less Wrong.  Read the sequences.  Read the top current conversations.  Put your own thinking out there (in the discussion section, for starters) so that others can help you find mistakes in your thinking, and so that you can get used to holding your own thinking to high standards.  Find or build an in-person community of aspiring rationalists if you can.

Is it useful to try to read every single comment?  Probably not, on the margin; better to read textbooks or to do rationality exercises yourself.  But reading the Sequences helped many of us quite a bit; and epistemic rationality is the sort of thing for which sitting around reading (even reading things that are shiny-fun) can actually help.

 


[1]  To be specific: Jim was considering personally "raising awareness" about the virtues of the free market, in the hopes that this would (indirectly) boost economic growth in the third world, which would enable more people to be educated, which would enable more people to help aim for a positive human future and an eventual positive singularity.

There are several difficulties with this plan.  For one thing, it's complicated; in order to work, his awareness raising would need to indeed boost free market enthusiasm AND US citizens' free market enthusiasm would need to indeed increase the use of free markets in the third world AND this result would need to indeed boost welfare and education in those countries AND a world in which more people could think about humanity's future would need to indeed result in a better future. Conjunctions are unlikely, and this route didn't sound like the most direct path to Jim's stated goal.

For another thing, there are good general arguments suggesting that it is often better to donate than to work directly in a given field, and that, given the many orders of magnitude differences in efficacy between different sorts of philanthropy, it's worth doing considerable research into how best to give.  (Although to be fair, Jim's emailing me was such research, and he may well have appreciated that point.) 

The biggest reason it seemed Jim would benefit from LW was just manner; Jim seemed smart and well-meaning, but more verbally jumbled, and less good at factoring complex questions into distinct, analyzable pieces, than I would expect if he spent longer around LW.

[2] The traditional rationalist reply would be that if human reasoning is completely and permanently hopeless when divorced from the simple empirical tests of Popperian science, then avoiding such "unscientific" subjects is all we can do.

105 comments

Comments sorted by top scores.

comment by teageegeepea · 2010-11-19T04:12:37.038Z · LW(p) · GW(p)

It seems to me that most of the "raise awareness" campaigns are for things people are plenty aware of already.

Replies from: JGWeissman, John_Maxwell_IV
comment by JGWeissman · 2010-11-19T04:24:50.416Z · LW(p) · GW(p)

Indeed. Maybe we should instead call them "signaling awareness" campaigns. I wonder how many people would still be interested in participating.

comment by John_Maxwell (John_Maxwell_IV) · 2012-08-05T00:16:00.242Z · LW(p) · GW(p)

But you're only talking about those campaigns that you're already aware of! There may be lots of unknown "raise awareness" campaigns that will find the awareness of their issues boosted if they work hard enough.

comment by Vladimir_Golovin · 2010-11-19T10:45:53.346Z · LW(p) · GW(p)

But his discussion of how to “really help the world” seemed to me to contain a number of errors[1] -- errors enough that, if he cannot sort them out somehow, his total impact won’t be nearly what it could be.

Idea: a Rationalist Council -- basically a group of high-profile rationalists who help people who have dedicated themselves to "shaping the future of humanity in a positive way" accurately assess their abilities, and who offer good strategies for leveraging those abilities to maximize their impact.

People would privately submit their resumes to the Council members, who would evaluate them and offer a personal strategy that, in their opinion, would maximize that person's impact.

Replies from: Kaj_Sotala, CarlShulman, belkarx, JamesAndrix
comment by Kaj_Sotala · 2010-11-20T10:22:35.359Z · LW(p) · GW(p)

The proper name for this organization, of course, is the Bayesian Conspiracy.

comment by CarlShulman · 2010-11-19T19:05:37.657Z · LW(p) · GW(p)

Already exists in embryonic form: http://www.xrisknetwork.com/

comment by belkarx · 2023-10-05T18:01:21.197Z · LW(p) · GW(p)

Doesn’t 80,000 Hours do this?

comment by JamesAndrix · 2010-11-19T13:33:34.823Z · LW(p) · GW(p)

Upvoted because this shouldn't be at -1

comment by billswift · 2010-11-19T03:45:35.049Z · LW(p) · GW(p)

Conjunctions are not inherently unlikely; they are less likely than their conjuncts considered separately, but could easily be much more likely than a different argument.

Replies from: shokwave
comment by shokwave · 2010-11-19T15:10:45.404Z · LW(p) · GW(p)

Well, yes. But they are less likely than their conjuncts in a specific and mathematical way, and we have good evidence that people don't multiply their uncertainties the way they should - it appears that they simply take the average (!!!).

Charitably, I count eight conjuncts in the presented argument. If he had on average 80% confidence in each premise (raising awareness of the free market's virtues will overcome status quo bias, an increase in free-market enthusiasm in the first world will translate into more use of free markets in the third world - these don't feel like four-in-five-timers), then his plan, as stated, has at most a 17% chance of success. But Jim feels as if he has an 80% chance.
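
For concreteness, here is the arithmetic behind that 17% figure as a minimal sketch (the eight-premise and 80% numbers are the ones assumed above, not measured values):

    # Eight independent premises, each held with 80% confidence
    # (the numbers assumed in the comment above).
    p_each = 0.8
    n_premises = 8

    p_conjunction = p_each ** n_premises  # multiply the uncertainties, don't average them
    print(f"{p_conjunction:.1%}")  # ~16.8%, the "at most 17%" figure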

Your response is true in a trivial way, because 17% is far higher than the chance Zeus returns, and far higher again than Zeus and Jesus returning to give each other a cosmic high-five. But we can spot those very unlikely premises - and it's only the very unlikely premises that are less likely than a long list of conjunctions. We don't think like that - we don't see our true chances.

So, if you restrict the space of premises and arguments to what humans mostly deal with in their practical lives, "conjunctions are inherently unlikely" is an excellent rule of thumb until you can sit down and do the math.

Replies from: billswift
comment by billswift · 2010-11-19T18:06:38.562Z · LW(p) · GW(p)

What you write is true. But I have seen people go the other way - hear about some problem (such as the Conjunction Fallacy), then start over-compensating for it (for example, by always rating conjunctions as lower probability). Since the post as written wasn't entirely clear about the limits, I was just pointing out that automatically down-rating conjunctions is not always advisable.

I never had any problems remembering to multiply the probabilities once it was pointed out, partly because I had already had experience at calculating complicated reliability problems, which are structurally almost identical.

Replies from: shokwave
comment by shokwave · 2010-11-20T06:18:56.950Z · LW(p) · GW(p)

I never had any problems remembering to multiply the probabilities once it was pointed out, partly because I had already had experience at calculating complicated reliability problems, which are structurally almost identical.

That is a good grounding for avoiding the conjunction fallacy! Even half a second spent deciding whether your argument is 'reliable' according to methods you have for estimating reliability might stop you from motivated cognition in the direction of "my argument is right". Makes me wonder what other real-life problems have a similar enough structure to common biases to help with instrumental rationality.

comment by JGWeissman · 2010-11-19T02:05:19.062Z · LW(p) · GW(p)

Programming does indeed have aspects in which you get lots of immediate feedback, so that you can be successful with minimal rationality. But in many cases, getting that feedback is itself a skill that requires abstract reasoning (What corner cases should I be testing?). Other aspects of programming in which feedback can be slow or expensive include: How maintainable is this code? Will other programmers understand how to use my library? How will this program perform on large problems? How could an unauthorized user abuse this program?
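
As a minimal illustration of the corner-case point (the function and values are hypothetical, chosen only for illustration):

    # Hypothetical example: quick feedback ("the common case works") won't tell you
    # which corner cases matter; choosing them takes reasoning about the spec.
    def average(xs):
        return sum(xs) / len(xs)

    assert average([1, 2, 3]) == 2  # the obvious, immediate feedback

    # Corner cases you only discover by asking abstract questions:
    # average([])              -> ZeroDivisionError; what *should* an empty input do?
    # average([1e308, 1e308])  -> the sum overflows to float('inf')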

comment by Craig_Heldreth · 2010-11-21T15:19:06.895Z · LW(p) · GW(p)

Consider the following goals:

  1. to be smarter (i.e. more rational);
  2. to appear smarter;
  3. to feel smarter.

And construct a simple Venn diagram. Carefully working through all of Donald Knuth's The Art of Computer Programming or Landau & Lifshitz's Course of Theoretical Physics (10 volumes) would be a task right smack dab in the center of such a Venn diagram, passing the acid test of all three items on this list.

What generates the criticism of LessWrong as "shiny"?

I think many humans are susceptible to a trap of expending energy on #2 or on #3 and pretending they are working on #1. LessWrong could be seen as "shiny" if it is a particularly attractive trap of this nature. Hence, caution may be in order. The sequences are long and may be looked at as quality entertainment that is no substitute for Knuth or Landau & Lifshitz.

It is much higher-quality entertainment than Grant Morrison comic books. If you were going to put that LessWrong time into Grant Morrison, then reading LessWrong is pure win. If it is distracting you from Knuth or Landau & Lifshitz, then perhaps that is where the "shiny" criticism comes from.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-11-24T08:28:33.595Z · LW(p) · GW(p)

It doesn't seem obvious to me that reading about solved problems in computer programming and theoretical physics would develop one's rationality faster than reading a blog that purportedly develops rationality techniques.

And in addition to the time costs of an activity there are also willpower costs. If my primary goal is to chill out in the evening after a day of school and software development, reading a difficult textbook may not help me achieve that goal. Skimming Less Wrong may help me achieve that goal and also gain me side benefits.

Replies from: Johnicholas, Craig_Heldreth
comment by Johnicholas · 2010-11-24T15:44:11.144Z · LW(p) · GW(p)

"Carefully working through" is much different than "reading".

Most of the time spent "carefully working through" is spent solving problems - both the ones inline in the text, and additional ones that you spontaneously think of when doing that kind of work.

Replies from: Nisan
comment by Nisan · 2010-11-24T17:46:39.964Z · LW(p) · GW(p)

Indeed. Just reading The Art of Computer Programming would be a pointless task. Incidentally, this wisdom is so obvious in academia that when people say "I read X" they really mean something like "I took notes and worked out most of the exercises". When they want to convey that they just read the text, they say "I looked at X". This language definitely misled me when I was an undergrad.

comment by Craig_Heldreth · 2010-11-24T14:03:07.331Z · LW(p) · GW(p)

I do not disagree with you at all. My point is not that the criticism of LessWrong as "shiny" is accurate, merely giving my take on the viewpoint this criticism comes from.

comment by XiXiDu · 2010-11-19T11:40:13.503Z · LW(p) · GW(p)

The traditional alternative is to deem subjects on which one cannot gather empirical data "unscientific" subjects on which respectable people should not speak...

There are some distinctions to be made here. Cryonics obviously provides a better chance to see the future after dying than rotting six feet under. Regarding retirement investment, just ask your parents or grandparents. Yet this argument against the necessity of empirical data breaks down at some point. Shaping the Singularity is not on par with having a positive impact on the distant poor. If you claim that predictions and falsifiability are unrelated concepts, that's fine. But believing some predictions - e.g. a technological Singularity spawned by AGI-seeds capable of superhuman recursive self-improvement - is not the same as believing others - e.g. that a retirement plan will provide for old age.

"I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified." Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

How should I interpret the above quote? If someone has to be able to follow the advanced arguments on Less Wrong to understand that an advanced education is disadvantageous yet necessary to understand this in the first place, how does Less Wrong help in deciding what to do? This is just an example of what I experience regarding Less Wrong. I'm unable to follow much of Less Wrong yet I'm told that it can help me decide what to do.

The basic problem here is that the education necessary to follow Less Wrong will not only teach me to be wary of the arguments on Less Wrong but will also preclude me from acting on the suggestions. How so? The main consensus here seems to be Cryonics and the dangers of AGI research. If it isn't, then at least the top rationalist on Less Wrong isn't as rational as suggested, which undermines the whole intention of the original post. So for now I'll assume that those two conclusions are the most important you can arrive at by learning from Less Wrong. Consequently this means that someone like me should care to earn enough money to support friendly AI research and to buy a Cryonics contract. But this is directly opposed to what I would have to do to arrive at those conclusions and be reasonably sure about their correctness. Amongst other things I would have to study, which would not allow me to earn enough money for many years.

Replies from: AnnaSalamon, AnnaSalamon, wedrifid, wedrifid, wedrifid
comment by AnnaSalamon · 2010-11-19T12:51:44.524Z · LW(p) · GW(p)

There are some distinctions to be made here. Cryonics obviously provides a better chance to see the future after dying than rotting six feet under.

Yes, but it is less obvious that the chance is large enough to be worth the money.

Regarding retirement investment, just ask your parents or grandparents.

My example was that the type of investment that can be relied upon in coming decades is not accessible that way. For example, many in the US entrusted their savings to real estate, which had been trustworthy for generations. And then it wasn’t.

Yet this argument against the necessity of empirical data breaks down at some point. Shaping the Singularity is not on par with having a positive impact on the distant poor. If you claim that predictions and falsifiability are unrelated concepts, that's fine. But believing some predictions - e.g. a technological Singularity spawned by AGI-seeds capable of superhuman recursive self-improvement - is not the same as believing others - e.g. that a retirement plan will provide for old age.

Yes, there is a difference of degree between the difficulty of figuring out whether mortgage-backed securities were as trustworthy as people thought (note that this was less obvious beforehand) and the difficulty of thinking non-nonsensically about the impacts of AI on our future. Nonetheless, they both seem sufficiently difficult that their practice is helped by the explicit study of rationality (so that e.g. heuristics and biases, and probability theory, are explicitly studied by many in finance).

Replies from: XiXiDu
comment by XiXiDu · 2010-11-19T16:40:23.465Z · LW(p) · GW(p)

What I said was rather meant to show that there are obvious reasons for which you might want to care about your retirement plan. What's very different is to predict that you shouldn't care about your retirement plan because either we'll be killed by superhuman AGI or join utopia as immortals. Less Wrong seems to be focused on the predictive nature of probability theory and on taking ideas seriously. I don't think that this is a good approach for any but the most intelligent and educated individuals and organisations. The traditional approach of relying on empirical data and the judgement of experts is to be favored for most people, in my opinion.

Replies from: wedrifid
comment by wedrifid · 2010-11-19T18:17:04.678Z · LW(p) · GW(p)

Less Wrong seems to be focused on the predictive nature of probability theory and on taking ideas seriously. I don't think that this is a good approach for any but the most intelligent and educated individuals and organisations. The traditional approach of relying on empirical data and the judgement of experts is to be favored for most people, in my opinion.

It is interesting to note that the latter is actually a form of the former, and a particularly strong one at that! In fact, the reasoning that you are using here is using the predictive nature of probability theory.

(That said, I agree that for most people biting the bullet when it comes to their abstract cognitions would be disastrous. Explosions, martyrs and deaths by stoning would abound. That and people would actually act on terrible dating advice rather than their instincts.)

comment by AnnaSalamon · 2010-11-19T13:04:16.262Z · LW(p) · GW(p)

How should I interpret the above quote? If someone has to be able to follow the advanced arguments on Less Wrong to understand that an advanced education is disadvantageous yet necessary to understand this in the first place, how does Less Wrong help in deciding what to do? This is just an example of what I experience regarding Less Wrong. I'm unable to follow much of Less Wrong yet I'm told that it can help me decide what to do.

You should interpret the above quote as an example of a single statement made by a single person. The claim is not that you should read Less Wrong so that you can believe every single statement Eliezer ever made. The claim is that you should interact with the best aspiring rationalists you can find (which for many will mean Less Wrong, although there are plenty of others out there too) so that you can learn to think carefully yourself. That is, so that you can learn to consider issues one by one, to apply the Power of Positivist Thinking, to apply the same standards to claims you like and to claims you dislike, etc.

The main consensus here seems to be Cryonics and the dangers of AGI research. If it isn't, then at least the top rationalist on Less Wrong isn't as rational as suggested, which undermines the whole intention of the original post. So for now I'll assume that those two conclusions are the most important you can arrive at by learning from Less Wrong.

Even if you’re correct that there’s wide agreement on those two points, it doesn’t follow that those two points are the most useful thing Less Wrong can give you. Less Wrong can not only give you claims labeled “true”, but can also improve your skill at deciding which claims are likely to be true.

But this is directly opposed to what I would have to do to arrive at those conclusions and be reasonably sure about their correctness. Amongst other things I would have to study, which would not allow me to earn enough money for many years.

In my experience, a relatively small amount of study (reading through the sequences and some similar material, and practicing the relevant subskills in in-person or online conversations) can significantly boost many folks’ epistemic rationality. This amount of study is compatible with earning a reasonable sum.

I’m a bit confused about what you’re trying to do in this comment. Are you curious, and honestly trying to untangle the issues and consider the evidence for and against each claim? Or what project are you engaged in?

Replies from: XiXiDu
comment by XiXiDu · 2010-11-19T14:31:51.695Z · LW(p) · GW(p)

The claim is that you should interact with the best aspiring rationalists you can find.

And that claim is what I have been inquiring about. How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this but provided no evidence for outsiders to follow through on it. The only exceptional and novel thesis to be found on LW concerns decision theory, which is not only buried but whose value one cannot judge without a prior education. The only exceptional and novel belief (prediction) here is the one regarding the risks posed by AGI. As with the former, one is unable to judge any claims as long as one does not read the sequences (or so it is claimed). But why would one do so in the first place? Outsiders are unable to judge the credence of this movement except by what its members say about it. This is my problem when I try to introduce people to Less Wrong. They don't see why it is special! They skim over some posts and there's nothing new there. You have to differentiate Less Wrong from other sources of epistemic rationality. What novel concepts are to be found here, what can you learn from Less Wrong that you might not already know or that you won't come across elsewhere?

I’m a bit confused about what you’re trying to do in this comment. Are you curious, and honestly trying to untangle the issues and consider the evidence for and against each claim? Or what project are you engaged in?

Your post gives the impression that Less Wrong can provide great insights for personal self-improvement. That is indisputable, but those whom it might help won't read it anyway or won't be able to understand it. I just doubt that people like you learn much from it. What have you learnt from Less Wrong; how did it improve your life? I have no formal education, but what I've read of LW so far does not seem very impressive, in the sense that there was nothing that disagreed with me so that I could update my beliefs and improve my decisions. I haven't come across any post that gave me some feeling of great insight; most of it was either obvious or I had figured it out myself before (much less formally, of course). The most important idea associated with Less Wrong seems to be that concerning friendly AI. What's special about LW is the strong commitment here regarding that topic. That's why I'm constantly picking on it. And the best argument for why Less Wrong hasn't improved people's perception of the topic of AI in some cases is that they are intellectually impotent. So if you are not arguing that they should give up, but rather that they should learn more, then I ask how, if not through LW, which obviously failed.

To give a summary: anyone interested enough to consider reading the sequences won't be able to spot much that he or she doesn't already know or that seems unique. The people whom Less Wrong would help the most do not have the necessary education to understand it. And the most important conclusion, that one should care about AGI safety, is insufficiently differentiated from the huge amount of writing concerned with marginal issues about rationality.

Replies from: None, AnnaSalamon, Perplexed, Eneasz, PhilGoetz
comment by [deleted] · 2010-11-19T16:49:58.428Z · LW(p) · GW(p)

Let me put down a few of my own thoughts on the subject.

I think it's odd that LessWrong spends so much time pondering whether or not it should exist! Most blogs don't do that; most communities (online or otherwise) don't do that. And if a person did that, you'd consider her rather abnormal. I view such discussion as noise; or at least a sidebar to more interesting topics.

I disagree that LW can't be useful to anyone who'd understand it. I offer my own experience as an example: it was useful to me in several ways.

  1. LW clinched my own break with religion (particularly the essay "Belief in Belief." )

  2. Eliezer's explanation of quantum physics is very interesting, intuitive, and as far as I know isn't replicated in any textbook.

  3. LW introduced me to futurist topics that I simply hadn't heard of, or realized that sensible people thought about (cryonics, the Singularity).

  4. I met a few real-life friends through LW, for whom I have a lot of respect.

  5. Finally, as far as instrumental rationality goes, LW took the place of two other, lower-quality internet forums in my free-time budget, so I spend more time out of my day trying to be thoughtful, rather than sleazy and goofy.

A couple of common topics on LW aren't all that interesting to me. Productivity/time management advice just strikes me as a bit of a guilt trip, which I can come up with by myself, thank you very much. I don't like models of how the mind works that aren't based in anything empirical -- I mistrust that sort of thing. (Not that professional researchers don't do the same thing!) I'm not a fan of the periodic gender wars and the oops-someone-mentioned-politics-and-we-all-went-crazy catastrophes. And I have a pet peeve about the local convention of using rather colorless language, speaking in very general terms, and second-guessing oneself and others all the time.

But apart from all that, it's a pretty damn good forum, and it does teach people new things.

Replies from: XiXiDu, John_Maxwell_IV
comment by XiXiDu · 2010-11-19T19:02:25.365Z · LW(p) · GW(p)

Your recent post is a good example. A friendly math post! I gave up after reading "...within-cluster sum of squared differences..." :-)

Your points are really surprising. I do not interact with educated people in meatspace at all. I didn't think that someone who had reached your level of education needed LW to break with religion. And I've always been the kind of person to take futuristic topics seriously by default, so that was no surprise at all to me. I guess that is why people here are irritated when I argue that science fiction authors have been talking about many of the topics discussed here for a long time. I don't see how the fictional exploration of concepts could lower the credence of the subjects.

Nevertheless, I never tried to argue that Less Wrong is useless. It's one of my favorite places in the metaverse.

Replies from: wedrifid, None, wedrifid
comment by wedrifid · 2010-11-19T19:13:40.313Z · LW(p) · GW(p)

I didn't think that someone who reached your level of education needed LW to break with religion.

People certainly ought not need to. By that I mean that people with the general cognitive capacity of humans have more than enough ability to evaluate religion as nonsense given even basic modern education. But even so it is damn hard to break free. Part of what makes the 'Belief in Belief' post particularly useful is that it is written in an attempt to understand what is really going on when people 'believe' things that don't make sense given what they know.

The social factors are also important. Religion is essentially about signalling tribal affiliation. It is dangerous to discard a tribal identity without already having found yourself a new tribe - even a compartmentalised online tribe used primarily for cognitive affiliation.

Nevertheless, I never tried to argue that Less Wrong is useless. It's one of my favorite places in the metaverse.

This is something I have to remind myself of when reading your comments. You are sincere. To someone who didn't know your online self at all, some of your arguments and questions would seem far more rhetorical than you intended them. You actually do update on new information, which is a big deal!

comment by [deleted] · 2010-11-19T19:11:34.823Z · LW(p) · GW(p)

I didn't realize I was being unclear in that last post! Clearly it's one of those things that takes practice. (In my defense I really don't know where the median LW reader is at math; the level of that post was a wild guess.)

Glad you're not opposed to LessWrong as a place. I'm not certain myself whether it really fulfills its stated goal of helping people come to conclusions more rationally. (When decisions are actually hard, when empirical evidence is sparse and trial-and-error is impossible, I'm not sure it's possible to decide rationally at all! )

I think one thing it does is promote a norm of measured thinking, where we keep our emotions at a conversational level instead of letting them shout. I've definitely noticed that attitude spilling out into my everyday life, and I find myself checking "do I think that's really plausible or am I just saying it?"

Replies from: XiXiDu, wedrifid
comment by XiXiDu · 2010-11-20T10:01:53.046Z · LW(p) · GW(p)

I didn't realize I was being unclear in that last post!

No, that isn't it. It's just that the math was above my current level of education. It was all Chinese to me! That doesn't mean that I am against advanced math posts. I believe more technical posts would improve Less Wrong a lot. I loved the recent posts by cousin_it. Even though the key issues were above my head, they introduced me to so many new ideas. They gave me this feeling of discovering and learning something new and important. And the discussions they spawned have been of a higher standard, because nobody with less education dared to say much. They also spawned awesome comments like this one. Your post is no different; I've just deferred reading it until I've learnt the necessary math. Such posts actually give me an incentive to learn more.

How to improve Less Wrong:

  • Write more technical posts (including math).
  • Either: Define the demographics. Explicitly mention the level of education necessary for all of Less Wrong.
  • Or: Introduce labels rating the level of difficulty for each post.
  • Provide more background knowledge in each post you write through references and links.

Example:

If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X).

Someone like me has to look up each of the symbols. It would have been much easier this way: If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X) - with every word and symbol hyperlinked to an explanation of what it means.

  • Advance the FAQ and link to it on the frontpage (When should I write a top-level article?; You must read the sequences before commenting etc.).
  • Be more kind to people who don't know better. Try to link them up and don't explain what's wrong but why and how they are wrong.

I think one thing it does is promote a norm of measured thinking, where we keep our emotions at a conversational level instead of letting them shout.

Yeah, I'm trying hard not to write without thinking. Sometimes I still fail, especially when I'm tired.

Replies from: wnoise
comment by wnoise · 2010-11-20T12:18:52.512Z · LW(p) · GW(p)

Someone like me has to look up each of the symbols. It would have been much easier this way: If P(Y|X) ≈ 1, then P(X∧Y) ≈ P(X).

"If" should not go to Conditional_(Programming), but "Logical Implication", though I don't see the need for a link. It really is just the standard meaning of "if", and if people don't know the meaning of "if", advanced rationality is probably a bit beyond what they can immediately use.

"1" as a link to percentage is odd as well. It's just the number one. Yes, people are often more used to it as 100% in the context of probability, but the link doesn't clarify that in any useful way.

The links for conditional probability and conjunction are great though. It's quite possible to not be familiar with those particular bits of notation.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-20T13:23:18.278Z · LW(p) · GW(p)

I know, but you see what the issue is here. It actually has been a problem all my life. I'm really happy that there are now places like the Khan Academy and BetterExplained that actually explain such matters in a concise and straightforward way, not like school teachers whom you never understand. Most of the time I only have to watch/read their explanation once to grasp it. Further, they go into details you are never told about in school.

I guess I'm the kind of person who is unable to accept that 1+1=2 until someone explains the terms and operators. I only started with mathematics last year, with prior knowledge of basic arithmetic. Yet one of the first things I tried to figure out was what '+' actually means. That showed me that infix operators are functions and led me to the recursive and set-theoretic definitions of addition. Only at that point was I satisfied. Which reminds me of a problem I had in German lessons back in elementary school. I always insisted on pronouncing certain words the way I thought was most logically consistent, e.g. pronouncing 'st' not as 'sch'. Nobody ever told me that natural language evolved and that pronunciation is just a cultural consensus, an axiomatic definition, not something you can infer from the general to the specific. So I kept pronouncing it the way I thought was reasonable and ended up with bad grades. Such problems accumulated and I just stopped doing anything for school (also because I thought the other kids were all aliens). I've only been catching up for a few years now. English was the first thing I taught myself.

The links for conditional probability and conjunction are great though. It's quite possible to not be familiar with those particular bits of notation.

Hah! You must be one of those people who are only surrounded by educated folks. I don't know anyone in real-life who has any clue what a logical conjunction could be (been working as baker and doing roadworks). Something nasty maybe :-)

comment by wedrifid · 2010-11-19T19:34:40.605Z · LW(p) · GW(p)

I find myself checking "do I think that's really plausible or am I just saying it?"

I find myself checking "I think that's really plausible. That can't be good. I wonder what I should be saying instead to be socially successful." ;)

comment by wedrifid · 2010-11-19T19:42:58.977Z · LW(p) · GW(p)

A friendly math post! I gave up after reading "...within-cluster sum of squared differences..." :-)

It is easy for a math literate person to over-estimate how obvious certain jargon is to people. Like 'sum of squared differences' for example. Squared differences is just what is involved when you are calculating things like standard deviation. It's what you use when looking at, say, a group of people and deciding whether they all have about the same height or if some are really tall but others are really short. How different they are.

For those who have never had to manually calculate the standard deviation and similar statistics the term would just be meaningless. (Which makes your example a good demonstration of your point!)
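
For anyone in that position, here is a small sketch of what the phrase computes (the heights are made-up numbers, purely for illustration):

    # Made-up heights (cm); how spread out are they?
    heights = [150, 160, 170, 180, 190]
    mean = sum(heights) / len(heights)

    # "Sum of squared differences": square each point's distance from the mean, then add them up.
    ss_diff = sum((h - mean) ** 2 for h in heights)

    # Dividing by the count and taking the square root gives the (population) standard deviation.
    std_dev = (ss_diff / len(heights)) ** 0.5
    print(ss_diff, std_dev)  # 1000.0, ~14.14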

Replies from: komponisto, Peter_de_Blanc
comment by komponisto · 2010-11-20T02:05:41.540Z · LW(p) · GW(p)

Squared differences is just what is involved when you are calculating things like standard deviation

Never mind that; just parse the damn phrase! All you need to know is what a "difference" is, and what "to square" means.

Why, I wonder, do people assume that words lose their individual meanings when combined, so that something like "squared differences" registers as "[unknown vocabulary item]" rather than "differences that have been squared"?

Replies from: wedrifid, kragensitaker
comment by wedrifid · 2010-11-20T05:34:07.043Z · LW(p) · GW(p)

Why, I wonder, do people assume that words lose their individual meanings when combined, so that something like "squared differences" registers as "[unknown vocabulary item]" rather than "differences that have been squared"?

Because quite often sophisticated people will punish you socially if you don't take special care to pay homage to whatever extra meaning the combined phrase has taken on. Caution in such cases is a practical social move.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-11-20T05:53:24.215Z · LW(p) · GW(p)

Good observation; I had been subliminally aware of it but nobody had ever pointed it out to me explicitly.

comment by kragensitaker · 2010-11-24T13:12:33.760Z · LW(p) · GW(p)

It's also very helpful to know things like why someone might go around squaring differences and then summing them, and what kinds of situations that makes sense in. That way you can tell when you make errors of interpretation. For example, "differences pertaining to the squared" is a plausible but less likely interpretation of "squared differences", but knowing that people commonly square differences and then sum them in order to calculate an L₂ norm, often because they are going to take the derivative of the result so as to solve for a local minimum, makes that a much less plausible interpretation.
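
For instance, one standard use of that pattern (a sketch with made-up numbers, not taken from the comment above): choose a single summary value m for some data points by minimizing the summed squared differences; solving "derivative equals zero" gives m equal to the mean, which the toy check below confirms numerically.

    # Made-up data: confirm that the m minimizing the sum of squared differences
    # is the mean (the solution you get by setting the derivative to zero).
    xs = [2.0, 3.0, 7.0]
    mean = sum(xs) / len(xs)  # 4.0

    def sum_sq_diff(m):
        return sum((x - m) ** 2 for x in xs)

    grid = [i / 100 for i in range(0, 1001)]  # candidate values 0.00 .. 10.00
    best = min(grid, key=sum_sq_diff)         # numerical minimizer
    print(best, mean)                         # both print 4.0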

And for a Bayesian to be rational in the colloquial sense, they must always remember to assign some substantial probability weight to "other". For example, you can't simply assume that words like "sum" and "differences" are being used with one of the meanings you're familiar with; you must remember that there's always the possibility that you're encountering a new sense of the word.

comment by Peter_de_Blanc · 2010-11-19T19:58:24.109Z · LW(p) · GW(p)

For those who have never had to manually calculate the standard deviation and similar statistics the term would just be meaningless. (Which makes your example a good demonstration of your point!)

Really? I think I would have understood that sentence before the first time I tried to calculate a standard deviation manually. In general, there are many ways to arrive at an understanding of a concept. I'm very skeptical of statements of the form "you can't understand X without doing Y first."

Replies from: wedrifid
comment by wedrifid · 2010-11-19T20:04:42.750Z · LW(p) · GW(p)

I was being polite.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-24T16:30:13.986Z · LW(p) · GW(p)

What do you mean? Are you saying that everyone with an average IQ is supposed to be able to understand what it means to minimize the within-cluster sum of squared differences, regardless of education? I don't know what a standard deviation is either. I am able to read Wikipedia, understand what to do and use it. I know what squared means and I know what differences means. I just expected the sentence to mean more than the sum of its parts. Also I do not call the ability to use tools comprehension. What I value is to know when to use a particular tool, how to use it effectively and how it works.

You could teach stone-age people to drive a car. It would still seem like magic to them. Yet if you cloned them and exposed them to the right circumstances they might actually understand the internal combustion engine once grown up. Same IQ. Likewise, the server WolframAlpha runs on does possess a certain potential, yet what enables that potential is the five million lines of Mathematica.

I'd be really surprised if someone were able to understand the sentence on a first reading with a self-taught one-year background in mathematics. That's not to say there are no exceptions; I'm just not a prodigy.

Replies from: None, wedrifid
comment by [deleted] · 2010-11-26T13:53:39.765Z · LW(p) · GW(p)

I think you're right. "Sum of squared differences" makes sense as a normal thing to do with data points only if you've learned that it's a measure of how spread apart they are, that it's the variance up to a constant factor, and that making the variance small is a good way to ensure that a cluster is "well clumped." There is a certain amount of intuition that's built up from experience.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2010-11-26T15:45:09.353Z · LW(p) · GW(p)

I also want to stress the point that I'm a bit biased(?) when it comes to understanding concepts. Surely I could accept any mathematical method or algorithm at face value. After all I'm also able to use WolframAlpha. But I feel that doesn't count. At least I do not value such understanding. If you taught a prehistoric man to press some buttons he would be able to control a nuclear facility.

Many people are bothered by the counter-intuitive nature of probability. I have never been more confused by probability than by any other branch of mathematics. I believe that people regard probability as more difficult to understand because they learn about it much later than other mathematical concepts. For me that is very different because it is all new to me. For me P(Y) ≥ P(X∧(X->Y)) is as intuitive as (actually more intuitive than) a^2 + b^2 = c^2. The first makes sense in and of itself; the second needs context and proof (at least to my gut feeling). I just don't see how 2 + 2 = 4 is more obvious than Bayes' theorem. You just learnt to accept that 2 + 2 = 4 because 1) you encounter the problem very often, 2) you can easily verify its solution, and 3) you learn about it early on. But it is not self-evident.

Replies from: wedrifid
comment by wedrifid · 2010-11-26T15:53:27.689Z · LW(p) · GW(p)

I also want to stress the point that I'm a bit biased(?) when it comes to understanding concepts.

This is something people have noticed and it influences their responses. Aggressive "not understanding" is often considered a sign of bad faith, for good reason.

comment by XiXiDu · 2010-11-26T15:04:46.215Z · LW(p) · GW(p)

What I noticed is that everyone seems to assume that my problem understanding the sentence "...within-cluster sum of squared differences..." concerned "sum of squared differences" and not "within-cluster". I don't know the definition of the concept of a mathematical cluster. What might add to the confusion is that I'm not even sure about the meaning of the English word "cluster". After that I decided to postpone reading the post. I could take the effort to look everything up, of course, but thought it would be more effective to read it in the future.

Your post simply served as an example of how difficult it can be to read Less Wrong without a lot of background knowledge.

Replies from: wedrifid, None
comment by wedrifid · 2010-11-26T15:44:56.436Z · LW(p) · GW(p)

What I noticed is that everyone seems to assume that my problem understanding the sentence "...within-cluster sum of squared differences..." concerned "sum of squared differences" and not "within-cluster".

Not really. I actually wrote a basic explanation of the whole sentence concept by concept but trimmed it down to the part that best illustrated dependence on mathematical background. Saying "within cluster is basically a phrase in English that refers to the same thing that's in the title of the post" wouldn't have helped convey the point. :P

It does, however, illustrate a different point. There is a trait related not just to intelligence but also to openness to information and flexible thinking that makes some people more suited than others to picking up and following new topics and ideas based on what they already know and filling in the blanks with their best inference. Confidence is part of it but part of it is social competition strategy embodied at the cognitive level.

comment by [deleted] · 2010-11-26T15:11:34.559Z · LW(p) · GW(p)

There isn't an explicit mathematical concept of a cluster.

Here's what K-means does. Say, K is 3.

You want to partition your data points into three groups so as to minimize the sum of squared differences within each group. Trying every possible partition would be infeasible, so instead you pick three initial centers, assign each point to its nearest center, recompute each center as the mean of its group, and iterate.
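
A minimal sketch of that procedure on made-up one-dimensional data with K = 3 (purely illustrative, not code from the post):

    import random

    points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 9.0, 8.8]  # made-up 1-D data
    K = 3

    centers = random.sample(points, K)  # initial guesses
    for _ in range(10):  # alternate the assignment and update steps
        clusters = [[] for _ in range(K)]
        for p in points:
            nearest = min(range(K), key=lambda i: (p - centers[i]) ** 2)  # nearest center
            clusters[nearest].append(p)
        # recompute each center as the mean of its cluster (keep the old center if a cluster is empty)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]

    print(sorted(centers))  # with a lucky start, roughly [1.0, 5.07, 8.97]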

comment by wedrifid · 2010-11-26T12:34:26.433Z · LW(p) · GW(p)

What do you mean? Are you saying that everyone with an average IQ is supposed to be able to understand what it means to minimize the within-cluster sum of squared differences, regardless of education?

No, approximately the opposite of that. Are you sure you didn't intend this to be a reply to Peter? It seems to be quite an odd reply to me in the context.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-26T13:37:40.863Z · LW(p) · GW(p)

You said that you were being polite in what you previously wrote. I parsed that as meaning that you agree with Peter de Blanc but chose to communicate this in a way that makes it possible to arrive at the conclusion without stating it. In other words, I should have been able to understand the sentence.

I didn't reply to Peter de Blanc because I don't know him and he doesn't know me and so his statement that he would have understood Y without X doesn't give me much information regarding my own intelligence. But you have actually read a lot of my comments and addressed me directly in the discussion above.

Interestingly, I'm having a discussion (see my previous comments) with Roko about whether one should tell people directly that they are dumb or try to communicate such a truth differently.

Replies from: wedrifid
comment by wedrifid · 2010-11-26T14:02:22.256Z · LW(p) · GW(p)

Not polite enough to lie, but polite enough to leave off all the caveats and exceptions. Some here could understand the sentence even with no education in mathematics. Even so, the essentials of what I said were sincere. Piecing together that kind of jargon from the scraps of information available in the context is a far harder task than just understanding the article itself.

comment by John_Maxwell (John_Maxwell_IV) · 2010-11-24T08:32:19.844Z · LW(p) · GW(p)

I think it's odd that LessWrong spends so much time pondering whether or not it should exist! Most blogs don't do that; most communities (online or otherwise) don't do that. And if a person did that, you'd consider her rather abnormal. I view such discussion as noise; or at least a sidebar to more interesting topics.

Oh come on. You really think the fact that no one else is doing it means it is a bad idea?

And besides, Hacker News also has periodic controversies over the fact that some of its users read it instead of hacking. My guess is that any forum populated by ambitious people will have periodic controversies over whether it should be killed off/re-channeled/etc. And that's a good thing.

Replies from: None
comment by [deleted] · 2010-11-24T19:52:06.232Z · LW(p) · GW(p)

That is a good point. Although generally I'm a fan of conformity; it's often a sign that you're doing things right.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2010-11-26T21:42:56.132Z · LW(p) · GW(p)

Sure, but that should be a very weak heuristic.

If you sample from the set of online communities you know of, you'll tend to see bigger and longer lasting ones more frequently than smaller and shorter-lasting ones. So by conforming with online communities you see, you're making your community larger and longer-lasting. That's not obviously a good thing.

comment by AnnaSalamon · 2010-11-19T17:00:08.332Z · LW(p) · GW(p)

And that claim is what I have been inquiring about. How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this[.]

My post didn't claim Less Wrong contains the best rationalists anywhere. It claimed that for many readers, Less Wrong is the best community of aspiring rationalists that they have easy access to. I wish you would be careful to be clear about exactly what is at issue and to avoid straw man attacks.

As to how to evaluate Less Wrongers’, or others’, rationality skills: It is hard to assess others’ rationality by evaluating their opinions on a small number of controversial issues. This difficulty stems partly from the difficulty of oneself determining the right answers (so as to know whether to raise or lower one’s estimate of others with those views). And it stems in part from the fact that a small number of yes/no or multiple-choice-style opinions will provide only limited evidence, especially given communities’ tendency to copy the opinions of others within the community.

One can more easily notice what processes LWers and others follow, and one can ask whether these processes are likely to promote true beliefs. For example, LWers tend to say they’re aiming for true beliefs, rather than priding themselves in their faith, optimism, etc. Also, folks here do an above-average job of actually appearing curious, of updating their claims in response to evidence, of actively seeking counter-evidence, of separating claims into separately testable/evaluable components, etc.

At the risk of repeating myself: it is these processes that I, and at least some others, have primarily learned from the sequences/OB/LW. This material has helped me learn to actually aim for accurate beliefs, and it has given me tools for doing so more effectively. (Yes, much of the material in the sequences is obvious in some sense; but reading the sequences moved it from “somewhat clear when I bothered to think about it” to actually a part of my habits for thinking.) I’m a bit frustrated here, but my feeling is that you are not yet using these habits consistently in your writing -- you don’t appear curious, and you are not carefully factoring issues into separable claims that can be individually evaluated. If you do, we might make more progress talking together!

Replies from: multifoliaterose
comment by multifoliaterose · 2010-11-19T17:29:56.571Z · LW(p) · GW(p)

My impression is that XiXiDu is curious and that what you're frustrated by has more to do with his difficulty expressing himself than with closed-mindedness on his part. Note that he compiled a highly upvoted list of references and resources for Less Wrong - I read this as evidence that he's interested in Less Wrong's mission and think that his comments should be read more charitably.

I'll try to recast what I think he's trying to say in clearer terms sometime over the next few days

Replies from: AnnaSalamon, XiXiDu
comment by AnnaSalamon · 2010-11-19T19:43:50.917Z · LW(p) · GW(p)

I agree with you, actually. He does seem curious; I shouldn't have said otherwise. He just also seems drawn to the more primate-politics-prone topics within Less Wrong, and he seems further to often express himself in spaghetti-at-the-wall mixtures of true and untrue, and relevant and irrelevant statements that confuse the conversation.

Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.

Replies from: multifoliaterose, wedrifid
comment by multifoliaterose · 2010-11-20T06:53:06.560Z · LW(p) · GW(p)

He just also seems drawn to the more primate-politics-prone topics within Less Wrong

Arguably the primate-politics-prone topics are the most important ones; the tendency that you describe can be read as seriousness of purpose.

he seems further to often express himself in spaghetti-at-the-wall mixtures of true and untrue, and relevant and irrelevant statements that confuse the conversation.

Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.

Agreed.

comment by wedrifid · 2010-11-19T19:51:33.012Z · LW(p) · GW(p)

Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.

Not to mention more pragmatic socially in the general case. Unless you believe you have the capacity to be particularly dominant in a context and wish to introduce yourself near the top of a hierarchy. Some people try that here from time to time, particularly those who think they are impressive elsewhere. It is a higher risk move and best used when you know you will be able to go and open a new set, I mean community, if your dominant entry fails.

Replies from: shokwave
comment by shokwave · 2010-11-20T06:34:31.604Z · LW(p) · GW(p)

Some people try that here from time to time,

Confession: Having a few muddled ideas of signalling in mind when I joined LessWrong, I knew of this pattern (works really well at parties!) and decided that people here were too savvy, so I specifically focused on entering as low as possible in the hierarchy. I'm curious whether that was well-received because of various status reasons (made others feel higher-status) or because it was simply more polite and agreeable.

comment by XiXiDu · 2010-11-19T20:37:14.071Z · LW(p) · GW(p)

I'll try to recast what I think he's trying to say in clearer terms sometime over the next few days

Quick Summary (Because I wanted to ask you about Baez anyway / ~off-topic regarding the OP):

Why does someone like me, someone who has no formal education, understand the importance of research on friendly AI and the risks posed by AGI research, while someone like John Baez (a top mathematician) tries to save the planet from risks that I believe can be neglected? That is what I'm very curious about. It might appear differently from what I've been saying here in the past, but I'm only taking a different position to get some feedback. I really do not disagree with anything on Less Wrong. I'm unable to talk to those people and ask them, but I can challenge you people in their name and see what feedback I get.

What's interesting here is that the responses I got so far made me doubt whether my agreement with Less Wrong and the SIAI is as sane as I believed. I also started to doubt that Eliezer Yudkowsky is as smart as I thought, and I had thought he was the smartest person alive. It's just that the best the people here can come up with is telling you to read the sequences, complaining about how you say something rather than what you are saying, telling you that people who disagree are intellectually impotent, or just stating that they don't have to convince you (no shit Sherlock!).

So why have I commented on this post? I'm trying to improve LW based on my own perception and on what I've noticed about outsiders I've chatted with about LW (which is probably a rationalization, the real reason being that the attitude here pisses me off). What I'm also most curious about is the strong contrast between LW and academia. It just seems wrong that the people who would really need to know what LW has to say are not educated enough, and those who are don't care or doubt what is being said. I'm in between here and wonder about my own sanity. Yet I don't care enough, and am too lazy, to put in the effort to express myself better. But nobody else seems to be doing it. Even to me, who agrees (yes, I do), this place often seems like an echo chamber that responds to critics with cryptic messages or the rationality equivalent of the grammar police.

I'll try my best to leave you alone now; I was just too tired today to do much else and so commented here again (I only wanted to check for new posts and saw two that could both have been a tract from Jehovah's Witnesses, minus some image of Yudkowsky riding a donkey ;-)). Argh, why am I writing this? Grrr, I have to shut up now. Sorry, I can't resist, here goes...

Replies from: multifoliaterose, David_Gerard, XiXiDu, timtyler
comment by multifoliaterose · 2010-11-20T07:27:02.291Z · LW(p) · GW(p)
  1. Though there are many brilliant people within academia, there is also shortsightedness and group-think there, which could have led the academic establishment to ignore important issues concerning the safety of advanced future technologies.

  2. I've seen very little (if anything) in the way of careful rebuttals of SIAI's views from the academic establishment. As such, I don't think that there's strong evidence against SIAI's claims. At the same time, I have the impression that SIAI has not done enough to solicit feedback from the academic establishment.

  3. John Baez will be posting an interview with Eliezer sometime soon. It should be informative to see the back and forth between the two of them.

  4. Concerning the apparent group-think on Less Wrong: something relevant that I've learned over the past few months is that some of the vocal SIAI supporters on LW express views that are quite unrepresentative of those of the SIAI staff. I initially misjudged SIAI on account of being unaware of this point.

  5. I believe that if you're going to express doubts and/or criticism about LW and/or SIAI you should take the time and energy to express these carefully and diplomatically. Expressing unclear or inflammatory doubts and/or criticism is conducive to being rejected out of hand. I agree with Anna's comment here.

Replies from: XiXiDu
comment by XiXiDu · 2010-11-20T10:11:58.026Z · LW(p) · GW(p)

John Baez will be posting an interview with Eliezer sometime soon. It should be informative to see the back and forth between the two of them.

Wow, that's cool! They read my mind :-)

comment by David_Gerard · 2010-11-19T21:35:52.891Z · LW(p) · GW(p)

Even Eliezer Yudkowsky doesn't believe he's the smartest person alive. He's the founder of the site and set its tone early, but that's not the same thing.

Finding people smarter than oneself is essential to making oneself more effective and stretching one's abilities and goals.

For an example I'm closely familiar with: I think one of Jimmy Wales' great personal achievements with Wikipedia, as an impressively smart fellow himself, is that he discovered an extremely efficient mechanism for gathering around him people who made him feel really dumb by comparison. He'd be first to admit that a lot of those he's gathered around him outshine him.

Getting smarter people than yourself to sign up for your goals is, I suspect, one marker of success in selecting a good goal.

comment by XiXiDu · 2010-11-20T10:22:36.641Z · LW(p) · GW(p)

Please judge the above comment as a temporary lapse of sanity. I'm really sorry I failed again. But it's getting better. After turning off my PC I told myself dozens of times what an idiot I am. I always forget who I am and who you people are. When I read who multifoliaterose is, I wanted to sink into the ground for even daring to bother you people with my gibberish.

I guess you overestimate my education and intelligence and truly try to read something into my comments that isn't there. Well, never mind.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-11-20T15:52:18.447Z · LW(p) · GW(p)

But it's getting better.

I agree; the average quality of your comments and posts has been increasing with time and I commend you for this.

When I read who multifoliaterose is, I wanted to sink into the ground for even daring to bother you people with my gibberish.

This statement carries the connotation that I'm very important. At present I don't think there's solid evidence in this direction. In any case, there's no need to feel self-conscious about taking up my time; I'm happy to make your acquaintance and engage with you.

comment by timtyler · 2010-11-20T13:10:59.359Z · LW(p) · GW(p)

http://johncarlosbaez.wordpress.com/

...seems to be all about global warming. I rate that as a top dud cause - but there is a lot of noise - and thus money, fame, etc - associated with it - so obviously it will attract those interested in such things.

If someone tells you they are trying to save the planet, you should normally treat that with considerable scepticism. People like to associate themselves with grand causes for reasons that apparently have a lot to do with social signalling and status - and very little to do with the world actually being at risk.

Some take it too far: http://en.wikipedia.org/wiki/Messiah_complex

Replies from: Perplexed, Vladimir_Nesov, shokwave
comment by Perplexed · 2010-11-21T17:06:14.647Z · LW(p) · GW(p)

If someone tells you they are trying to save the planet, you should normally treat that with considerable scepticism.

Surely the skepticism should be directed toward the question of whether their recipe actually does save the world, rather than against their motivation. I don't think that an analysis of motivations for something like this even begins to pay any rent.

Replies from: timtyler
comment by timtyler · 2010-11-21T18:17:07.523Z · LW(p) · GW(p)

For me, this is a standard technique. Whenever someone tells me how altruistic they are or have been, I try and figure out which replicators are likely to be involved in the display. It often makes a difference whether someone's brain has been hijacked by memes - whether they are signalling their status to prospective business partners, their wealth to prospective mates - or whatever.

For example, if they are attempting to infect me with the same memes that have hijacked their own brain, my memetic immune system is activated - whereas if they are trying to convince people what a fine individual they are, my reaction is different.

comment by Vladimir_Nesov · 2010-11-20T14:03:45.646Z · LW(p) · GW(p)

What you said seems fine, but not the reason you chose to say it in this context: the implied argument. The form of expression makes it hard to argue with. Say it out loud.

Replies from: timtyler
comment by timtyler · 2010-11-20T14:18:54.621Z · LW(p) · GW(p)

There is more from me on the topic in my "DOOM!" video. Spoken out loud, nonetheless ;-)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-20T14:38:54.439Z · LW(p) · GW(p)

This doesn't address the problem with that particular comment. What you implied is well known; the problem I pointed out was not that it's hard to figure out, but that you protected your argument with a weaselly form of expression.

Replies from: timtyler
comment by timtyler · 2010-11-20T14:50:01.165Z · LW(p) · GW(p)

It sounds as though you would like to criticise an argument that you think I am implicitly making - but since I never actually made the argument, that gives you an amorphous surface to attack. I don't plan to do anything to assist with that matter just now - other priorities seem more pressing.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-20T14:55:02.083Z · LW(p) · GW(p)

It sounds as though you would like to criticise an argument that you think I am implicitly making - but since I never actually made the argument, that gives you an amorphous surface to attack. I don't plan to do anything to assist with that matter just now - other priorities seem more pressing.

Yes, that's exactly the problem. We should all strive to make our arguments easy to attack and our errors easy to notice and address. Not having that priority hurts the epistemic commons.

Replies from: timtyler
comment by timtyler · 2010-11-20T15:17:00.966Z · LW(p) · GW(p)

My argument was general - I think you want something specific.

However, preparing specific statements tailored to each of the DOOM-promoters involved is a non-trivial task, which would hurt me - by occupying my time with matters of relatively minor significance.

It would be nice if I had time available to devote to such tasks - but in the meantime, I am pretty sure the epistemic commons can get along without my additional input.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-20T15:43:16.229Z · LW(p) · GW(p)

However, preparing specific statements tailored to each of the DOOM-promoters involved is a non-trivial task, which would hurt me - by occupying my time with matters of relatively minor significance.

Since the significance of the matter is one of the topics under discussion, it can't be used as an argument.

Edit: But it works as an element of a description of why certain actions take place.

Replies from: timtyler
comment by timtyler · 2010-11-20T15:59:03.487Z · LW(p) · GW(p)

What I mean is that I assign the matter relatively minor significance - so I get on with other things.

I am not out to persuade others whether my analysis is correct - again, I have other things to do than publicly parade an analysis of my priorities.

Maybe my priority analysis is correct. Maybe my priority analysis is wrong. In either case, it is my main reason for not doing such tasks.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-20T16:12:10.547Z · LW(p) · GW(p)

What I mean is that I assign the matter relatively minor significance - so I get on with other things.

Yes, I indeed made a mistake by missing this aspect (a factual description of how a belief caused actions, as opposed to a normative discussion of the actions given the question of the belief's correctness).

As a separate matter, I don't believe the premise is correct (that any additional effort is required to phrase things non-weaselly), and thus that the belief in question plays even an explanatory role. But this is also under discussion, so I can't use that as an argument.

comment by shokwave · 2010-11-21T15:57:33.728Z · LW(p) · GW(p)

If someone tells you they are trying to save the planet, you should normally treat that with considerable scepticism.

Well, yes, but if someone tells you they are the tallest person in the world, you also should treat that with considerable scepticism. After all, there can only be one person who actually is the tallest person in the world, and it's unlikely in the extreme that one random guy would be that person. A one-in-six-billion chance is small enough to reject out-of-hand, surely!

The guy looks pretty tall though. How about you get out a tape-measure and then consult the records on height?

"Considerable scepticism" is not an argument against a claim. It is an argument for more evidence. What evidence makes John Baez's claims that he is trying to save the world more likely to be signalling than a genuine attempt?

Replies from: timtyler
comment by timtyler · 2010-11-21T16:43:58.427Z · LW(p) · GW(p)

If someone I met told me they were the tallest person in the world, I would indeed treat that with considerable scepticism. I would count my knowledge about the 7 billion people in the world as evidence weighing heavily against the claim.

Replies from: shokwave
comment by shokwave · 2010-11-21T17:13:18.624Z · LW(p) · GW(p)

Your 7 billion people are just your prior probability for him being the tallest before you actually examine his size. Once you have seen that he is somewhat tall, you can start developing a better prior:

If he's taller than any of the people you know, that puts him in at least the top three-hundredth - so fewer than 24 million people remain as contenders. If he's taller than anyone you've ever seen, that puts him in at least the top two-thousandth - so fewer than 3.5 million of that 7 billion are actually potential evidence that he's wrong.

So now our prior is 1 in 3.5 million. Now it's time to look for evidence. At this point, the number of people in the world is irrelevant: it's already been factored into the equation. What evidence can we use to find our posterior probability?

A cool thing about Bayesian reasoning is that you can cut extreme numbers down to reasonable sizes with some very cheap and very quick tests. In the case of possible ulterior motives for claiming to be saving the world, you can with some small effort distinguish between the "signalling" and "genuine" hypotheses. What tests - what evidence - should we be looking for here, to spot which one is the case?
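For what it's worth, the arithmetic above can be written as a single Bayes update. Below is a minimal sketch in Python; the population figure and the one-in-two-thousand "taller than anyone you've ever seen" likelihood are taken from the comments above as assumptions, and the `update` helper is purely illustrative.

```python
population = 7_000_000_000

# Prior probability that one randomly chosen person is the tallest in the world.
prior = 1 / population

def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayes update: returns P(hypothesis | evidence)."""
    p_evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / p_evidence

# Evidence: "he is taller than anyone I have ever seen."
# If he really is the tallest, he passes this test for certain (likelihood 1.0);
# if he isn't, only roughly 1 person in 2000 is that tall (assumed figure).
posterior = update(prior, likelihood_if_true=1.0, likelihood_if_false=1 / 2000)

print(f"{posterior:.2e}")  # about 2.9e-07, i.e. roughly 1 in 3.5 million
```

The same pattern applies to the signalling-versus-genuine question: pick a cheap test, estimate how likely each hypothesis is to pass it, and let the ratio do the work.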

comment by Perplexed · 2010-11-20T20:49:48.867Z · LW(p) · GW(p)

How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this but provided no evidence for outsiders to follow up on. The only exceptional and novel thesis to be found on LW concerns decision theory, which is not only buried but also impossible to judge as valuable without prior education. The only exceptional and novel belief (prediction) here is the one regarding the risks posed by AGI.

Obviously you cannot form a good judgment as to whether a person is a good rationalist by determining whether his opinion on a difficult subject matches your opinion. And, even more obviously, you can't do so based on Anna's authority.

Instead, you need to interact with the person on an issue of intermediate difficulty and notice whether what he says clears cobwebs from your mind and shines light in dark corners. Or whether you come away from the conversation more confused and in the dark than before.

You may notice that I am implicitly defining rationalism in terms of how well a person communicates rather than how well they think. And even more than that, I am focusing on how well he communicates with you, rather than how well he communicates in general. If you wish, you can object, saying "That is not rationalism". Well, perhaps not. But it is the characteristic you should seek out in your interlocutors.

comment by Eneasz · 2010-12-01T18:39:15.206Z · LW(p) · GW(p)

To give a summary: anyone interested enough to consider reading the sequences won't be able to spot much that he or she doesn't already know or that seems unique. The people whom Less Wrong would help the most do not have the necessary education to understand it.

I disagree strongly. Using myself as my only data-point (flawed, I know, but deeply relevant for me), the exact opposite is true. I had enough education to understand (nearly) everything (some of the more advanced math took extra study to follow). But I had never been exposed to such a large amount of concentrated sanity in writing. The greatest asset of LW wasn't that it provided education I didn't have, but rather that it provided sanity I'd never been exposed to. That made a huge difference.

comment by PhilGoetz · 2010-11-21T17:50:48.914Z · LW(p) · GW(p)

I have no formal education, but what I've read of LW so far does not seem very impressive, in the sense that there was nothing that disagreed with me in a way that would let me update my beliefs and improve my decisions. I haven't come across any post that gave me a feeling of great insight; most of it was either obvious or something I had figured out myself before (much less formally, of course).

That shouldn't happen, because many posts are controversial and/or have little support, and some posts contradict each other.

Replies from: XiXiDu, Vladimir_Nesov
comment by XiXiDu · 2010-11-21T19:40:47.714Z · LW(p) · GW(p)

I wasn't being specific in what I said. There actually have been some posts that introduced me to new concepts and allowed me to feel more justified in believing certain things; only because of LW was I able to compile this curriculum. In my opinion, though, there are many more insightful and novel comments than there are posts. I don't want to appear arrogant here or downplay the value of Less Wrong; I actually believe it is one of the most important resources. I just haven't read enough of LW yet to notice any major contradictions, which also means there might be great insights I haven't come across yet. I also don't think that most ideas here need much support (the top-ranked post seems to be an outlier).

But take a look at some popular posts: where do you disagree, or what have they taught you that you didn't already come up with on your own? Take for example the Ugh fields. Someone like me, who managed to abandon religion on his own without any help, reads that post, agrees wholeheartedly and upvotes it. But has it helped me? No; I'm rather the sort of person who naturally takes this attitude too seriously, who consciously overthinks things until he completely leaves near mode and operates in far mode only. I thought your post on self-fulfilling correlations was awesome, but there was no novel insight for me in it either. I know lots of people who should read your post and would benefit from it a lot, but such people won't read it. People like me, who visit a psychologist because they know they need help, won't be surprised when Jodie Foster's character in Contact admits it could all have been an illusion. People like me are naturally aware that they could be dreaming; doubt and the possibility of self-delusion are fundamental premises. But the people who really would need to see a psychologist, or read Less Wrong, believe they are perfectly normal or that they don't need to be told anything.

What I'm trying to say is that if Less Wrong wants to change the world, rather than being a place where hyper-rationalists can collectively pat each other on the back, you need to think about how to reach the people who need to know about it. And you need feedback: you have to figure out why people like Ben Goertzel fail to share some of the conclusions reached here, and update accordingly.

Replies from: timtyler
comment by timtyler · 2010-11-21T19:57:53.026Z · LW(p) · GW(p)

Were there ever any references identifying the Scary Idea as an official SIAI belief?

I think that - if they comment at all - they would come back with something like:

OK - so you don't think that unconstrained machine intelligence is "highly likely" to automatically DESTROY ALL LIFE AS WE KNOW IT. So: what do you think the chances of that happening are?!?

Replies from: XiXiDu
comment by XiXiDu · 2010-11-22T10:57:43.864Z · LW(p) · GW(p)

Does Eliezer believe that working on friendly AI and supporting friendly AI research is the most important and most rational way to positively influence the future of humanity? If he thinks so, then is it reasonable to suspect that his rationale for starting to write on matters of rationality was to plead his case for friendly AI research and convince other people that it is indeed the most effective way to help humankind? If not, what was his reason to start blogging on Overcoming Bias and Less Wrong? Why has he spent so much time helping people to become less wrong rather than working directly on friendly AI? How can you be less wrong and still doubt that you should support friendly AI research?

I still suspect that everything he does is a means to an end. I'm also of the opinion that if one reads all of Less Wrong and is afterwards (in the case that one wants to survive and benefit humanity) still unable to conclude that the best way to do so is by supporting the SIAI, then either one did not understand it due to a lack of intelligence or Less Wrong failed to convey its most important message. Therefore you should listen to the people who have read Less Wrong and disagree. You should also try to reach the people who haven't read Less Wrong but should, because they are in a position that makes it necessary for them to understand the issues in question.

Replies from: timtyler
comment by timtyler · 2010-11-22T19:28:17.895Z · LW(p) · GW(p)

Well, I tend to think that working on and supporting machine intelligence research is probably the most important way to positively influence the future of civilisation. The issue of what we want the machines to do is a part of the project.

So, such beliefs don't seem particularly "far out" - to me.

FWIW, Yudkowsky describes his motivation in writing about rationality here:

http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/

comment by Vladimir_Nesov · 2010-11-21T18:03:27.019Z · LW(p) · GW(p)

That shouldn't happen

...and is therefore evidence against your model (for what it's worth).

Replies from: orthonormal
comment by orthonormal · 2010-11-22T01:17:40.520Z · LW(p) · GW(p)

It could also be an instance of this.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-22T01:41:12.200Z · LW(p) · GW(p)

Aren't these one and the same? I had "for what it's worth" in there to account for uncertainty and probable weakness of the effect.

Replies from: orthonormal
comment by orthonormal · 2010-11-22T01:42:46.346Z · LW(p) · GW(p)

Denotation and connotation.

comment by wedrifid · 2010-11-19T17:36:06.791Z · LW(p) · GW(p)

The basic problem here is that the necessary education to follow Less Wrong will not only teach me to be wary of the arguments on Less Wrong but will also preclude me from acting on the suggestions. How so? The main consensus here seems to be cryonics and the dangers of AGI research. If it isn't, then at least the top rationalist on Less Wrong isn't as rational as suggested, which undermines the whole intention of the original post. So I'll assume for now that those two conclusions are the most important you can arrive at by learning from Less Wrong. Consequently this means that someone like me should aim to earn enough money to support friendly AI research and to buy a cryonics contract. But this is directly opposed to what I would have to do to arrive at those conclusions and be reasonably sure about their correctness. Amongst other things I would have to study, which would not allow me to earn enough money for many years.

This seems to be nonsensical. In most of the relevant cultures of the readership one need not direct overwhelming amounts of one's time to acquiring resources for cryonics membership.

There is always going to be a trade-off between spending time deciding what the best thing to do is and actually doing it. If you think you are best served by doing personal development and educating yourself so that you can best direct your other efforts, then do so. If not, don't. This doesn't seem to be a Less Wrong-specific problem.

comment by wedrifid · 2010-11-19T17:29:33.404Z · LW(p) · GW(p)

"I am tempted to say that a doctorate in AI would be negatively useful, but I am not one to hold someone's reckless youth against them - just because you acquired a doctorate in AI doesn't mean you should be permanently disqualified." Eliezer Yudkowsky, So You Want To Be A Seed AI Programmer

How should I interpret the above quote? If someone has to be able to follow the advanced arguments on Less Wrong to understand that an advanced education is disadvantageous, yet needs that very education to understand this in the first place, how does Less Wrong help in deciding what to do?

Your line of questioning here just seems strange in the context of the quote. The quote seems straightforward and not even all that relevant to whether lesswrong is useful for people who struggle to understand lesswrong. To the kind of people who have even a remote possibility of doing useful work on a seed AI it is just a trivial statement of Eliezer's personal opinion. While many don't agree with him Eliezer has written elsewhere on his opinion on academic orthodoxy as well as his own development with respect to approach to AI. Such opinions can just be taken with a grain of salt as they would be from anyone else.

This is just an example of what I experience regarding Less Wrong. I'm unable to follow much of Less Wrong yet I'm told that it can help me decide what to do.

There is value in making things as accessible as possible where this can be done without sacrificing the depth of the content. At the same time there are always going to be people who are not capable of following content on complex topics, whether that be rationality or anything else. Ultimately all communities, whether online or off, have a target demographic and are not for everyone.

comment by wedrifid · 2010-11-19T17:11:16.603Z · LW(p) · GW(p)

Shaping the Singularity is not on par with having a positive impact on the distant poor.

By this I assume you mean that trying to have a positive impact on the distant poor is largely futile and trying to shape a positive singularity is orders of magnitude more useful, even if the former doesn't do more harm than good.

comment by multifoliaterose · 2010-11-18T23:22:18.469Z · LW(p) · GW(p)

Thanks for making this post. I especially like the paragraph:

There is similarly no easy way to use the “try it and see” method to sort out what ethics and meta-ethics to endorse, or what long-term human outcomes are likely, how you can have a positive impact on the distant poor, or which retirement investments really will be safe bets for the next forty years. For these goals we are forced to use reasoning, as failure-prone as human reasoning is. If the issue is tricky enough, we’re forced to additionally develop our skill at reasoning -- to develop “epistemic rationality”.

comment by Louie · 2010-11-20T12:50:24.347Z · LW(p) · GW(p)

Thanks for your post, Anna. It really got me thinking. I was going to write you a long comment here, but it got too long, so I made it into a new top-level post.

And don't let the concern trolls get you down... remember... they totally support us!

Replies from: Vaniver, Armok_GoB
comment by Vaniver · 2010-11-24T20:52:24.658Z · LW(p) · GW(p)

It seems to me that even if someone is a concern troll, you gain nothing from a rationality standpoint by identifying them as such, and often lose quite a bit. Have no villains.

comment by Armok_GoB · 2010-11-24T20:42:33.616Z · LW(p) · GW(p)

I hadn't noticed this was what was going on. It really needs to be brought to attention in the relevant threads much more clearly.

comment by Aurini · 2010-11-23T23:22:20.621Z · LW(p) · GW(p)

I think a lot of the benefit I've derived from LessWrong is subconscious in nature - rational algorithms and basic Logic 101 are such a core part of the community that you wind up adopting them on a deeper level than you do just by learning about them. "If A then B; A, therefore B (but B doesn't get you A)" is easy to learn - 20 minutes at most for a particularly slow student - but applying it is a whole other story.
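As a small illustrative sketch in Python (the `implies` and `valid` helpers are names invented purely for the example), enumerating the truth table shows why the forward inference is valid while the reverse one is not:

```python
from itertools import product

def implies(p, q):
    # Material implication: "if p then q".
    return (not p) or q

def valid(premises, conclusion):
    # An inference pattern is valid iff the conclusion holds in every
    # truth assignment in which all of the premises hold.
    return all(
        conclusion(a, b)
        for a, b in product([True, False], repeat=2)
        if all(premise(a, b) for premise in premises)
    )

# Modus ponens: (A -> B), A, therefore B.
print(valid([lambda a, b: implies(a, b), lambda a, b: a], lambda a, b: b))  # True

# Affirming the consequent: (A -> B), B, therefore A.
print(valid([lambda a, b: implies(a, b), lambda a, b: b], lambda a, b: a))  # False
```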

For example, an idea recently struck me during a conversation (I plan to write a more detailed piece for my blog about this): with regard to Iraq, the alleged WMDs were a major source of debate - they were the casus belli - and back in the winter of 2002 their non-existence would have undermined the whole war. It only took about a year or two for proof of their non-existence to appear, but the debate continued raging for another five. Proof aside, most people still believed that WMDs - or 'something just like them' - actually existed.

With the pro-Iraq, pro-WMD side there was always the implied caveat "...but if I am wrong about WMDs, then I agree that the war is unjust." Their rigorous arguments, and their dismissal of the facts, only make sense if support for the war in Iraq was contingent on the WMD claims being true. Certainly, there were people like Hitchens who supported the war for other reasons - but for most people WMDs were the be-all and end-all argument for the war.

Nowadays you don't hear WMDs mentioned at all - presumably most people now accept that there were none - and yet those who believed in WMDs still support the war, or if they don't it's for different reasons - they aren't against the 2003 invasion, they're against the current situation.

Intellectual honesty on their part would require them to be against the war. Instead they chose a priori to be for the war, and WMDs were simply the argument they learned to vomit up. After that argument was dismissed, they found other reasons to be for the war.

How does all of this relate to Less Wrong? Four years ago I don't think I would have noticed this. I'd taken classes in logic at the time, and I was well aware of a fuzzy version of "Politics is the Mind-Killer" - but if I'd come up with such an idea, it would have taken a week of percolation. This time the idea crystallized almost instantly between sips of beer.

LessWrong is mostly my 'play time' - but like most forms of play... well, most forms in the evolutionary setting, anyway - it's play that's focussed on real-world results.

I am quantifiably smarter for all the time I waste here - calling it a software patch or upgrade is bang on.

EDIT: Just wanted to say that I might have some of my dates wrong. For my blog I'll actually do the research so that I don't sound like an idiot, particularly with regard to the progress of the WMD debate. I'm fairly certain the evidence will back up my thoughts on the matter, but it is possible that my anecdotes don't reflect reality. Herp derp, probably wouldn't have thought of that without LW either.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-23T23:33:48.503Z · LW(p) · GW(p)

(As an out-of-context remark.)

"...but if I am wrong about WMDs, then I agree that the war is unjust."

Under the assumption that the existence of WMDs justifies the war, the war was justified if the decision-makers believed that WMDs probably existed, as a result of an honest attempt to discern the truth. Whether WMDs actually existed is wholly irrelevant, except as evidence about the state of knowledge of the decision-makers at the time.

(The decision to pull out of the war based on new evidence is a separate question, since the situation is different.)

Replies from: Aurini
comment by Aurini · 2010-11-24T00:17:10.895Z · LW(p) · GW(p)

Except then you'd be right to demand that they state something like "You were right, the war was unjustified; perhaps we need better monitoring of our leaders. My current stance is that the war should be continued for reasons X."

There are smart people who held consistent pro-war opinions, granted - personally I'm trying not to take a stance on the war itself, in this situation - but the majority are chock full of cognitive dissonance.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-24T00:19:02.025Z · LW(p) · GW(p)

There are certainly many problems associated with any heated debate. I only addressed one point (so your reply is not really a reply to me; at least, I didn't understand a single point you made in it, so perhaps something in there was intended to bear on the point I addressed).

Replies from: Aurini
comment by Aurini · 2010-11-24T05:17:13.117Z · LW(p) · GW(p)

This is why I need to write a proper article about this. My first post was only a shorthand sketch, which now leaves me feeling like the reverse-Homer Simpson ("Sorry if it SOUNDED sarcastic.").

:)

comment by Alicorn · 2010-11-18T23:17:10.907Z · LW(p) · GW(p)

[safe bets](link AAA ratings?)

This and other clues lead me to believe that this post was published inadvertently.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2010-11-19T00:16:35.738Z · LW(p) · GW(p)

Thanks, Alicorn. It was published advertently, but I'd failed to adequately check for typos; it's fixed now.

comment by Gleb_Tsipursky · 2014-11-03T00:20:16.788Z · LW(p) · GW(p)

To enrich these great points about how to get more epistemic rationality, I would suggest intentionally associating positive emotions with epistemic rationality practices.